Structural Concentration in Weighted Networks: A Class of Topology-Aware Indices

Riso, L., Zoia, M. G.

arXiv.org Machine Learning

This paper develops a unified framework for measuring concentration in weighted systems embedded in networks of interactions. While traditional indices such as the Herfindahl-Hirschman Index capture dispersion in weights, they neglect the topology of relationships among the elements receiving those weights. To address this limitation, we introduce a family of topology-aware concentration indices that jointly account for weight distributions and network structure. At the core of the framework lies a baseline Network Concentration Index (NCI), defined as a normalized quadratic form that measures the fraction of potential weighted interconnection realized along observed network links. Building on this foundation, we construct a flexible class of extensions that modify either the interaction structure or the normalization benchmark, including weighted, density-adjusted, null-model, degree-constrained, transformed-data, and multi-layer variants. This family of indices preserves key properties such as normalization, invariance, and interpretability, while allowing concentration to be evaluated across different dimensions of dependence, including intensity, higher-order interactions, and extreme events. Theoretical results characterize the indices and establish their relationship with classical concentration and network measures. Empirical and simulation evidence demonstrate that systems with identical weight distributions may exhibit markedly different levels of structural concentration depending on network topology, highlighting the additional information captured by the proposed framework. The approach is broadly applicable to economic, financial, and complex systems in which weighted elements interact through networks.
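The abstract does not give the closed form of the baseline NCI, but it describes it as a normalized quadratic form: the weighted interaction realized along observed links, divided by the potential interaction among all distinct pairs. A minimal numpy sketch of that reading follows; the function name, the exact normalization benchmark, and the complete-graph denominator are assumptions, not the paper's definition:

```python
import numpy as np

def nci(weights, adjacency):
    """Hypothetical sketch of a Network Concentration Index.

    weights: 1-D array of nonnegative weights (e.g. market shares).
    adjacency: symmetric 0/1 matrix of observed links, zero diagonal.

    Returns the fraction of potential pairwise weighted interaction
    w_i * w_j realized along observed network links.
    """
    w = np.asarray(weights, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    realized = w @ A @ w                        # sum over linked pairs
    n = len(w)
    all_pairs = np.ones((n, n)) - np.eye(n)     # complete-graph benchmark
    potential = w @ all_pairs @ w
    return realized / potential

# Identical weight distributions, different topologies:
w = np.array([0.25, 0.25, 0.25, 0.25])
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
complete = np.ones((4, 4)) - np.eye(4)
print(nci(w, ring))      # < 1: only part of the potential interaction is linked
print(nci(w, complete))  # 1.0: every potential interaction is realized
```

The example illustrates the abstract's central point: the same uniform weight vector yields different index values under different topologies, information a purely distributional index such as Herfindahl-Hirschman cannot capture.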



eec7fee9a8595ca964b9a11562767345-Supplemental-Conference.pdf

Neural Information Processing Systems

A.1 Model Architecture. The architecture of the SinGAN used in our paper follows that in [4]. The trade-off parameter for the gradient penalty in WGAN-GP [3] is set to 0.1. Adam [5] is adopted as the stochastic optimizer, with an initial learning rate of 0.0005 and a decay factor of 0.1 after finishing 80% of the iterations, and we set the maximum number of training iterations to 2,000. C.2 Per-Stage Weight Distribution. In addition to the total weight distribution, a comparison of the per-stage weight distribution is also provided.
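The training schedule described above (Adam at an initial learning rate of 0.0005, decayed by a factor of 0.1 after 80% of 2,000 iterations) can be sketched in PyTorch roughly as follows; the model and loss are placeholders, not the SinGAN objective from the paper:

```python
import torch

model = torch.nn.Linear(8, 1)        # placeholder for the generator
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)
max_iters = 2000

# Multiply the learning rate by 0.1 once 80% of the iterations are done.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[int(0.8 * max_iters)], gamma=0.1)

for _ in range(max_iters):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 8)).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```

After iteration 1,600 the learning rate drops from 5e-4 to 5e-5 and stays there for the remaining 20% of training.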








Uncertainty Reasoning with Photonic Bayesian Machines

Brückerhoff-Plückelmann, F., Borras, H., Hulyal, S. U., Meyer, L., Ji, X., Hu, J., Sun, J., Klein, B., Ebert, F., Dijkstra, J., McRae, L., Schmidt, P., Kippenberg, T. J., Fröning, H., Pernice, W.

arXiv.org Artificial Intelligence

Artificial intelligence (AI) systems increasingly influence safety-critical aspects of society, from medical diagnosis to autonomous mobility, making uncertainty awareness a central requirement for trustworthy AI. We present a photonic Bayesian machine that leverages the inherent randomness of chaotic light sources to enable uncertainty reasoning within the framework of Bayesian Neural Networks. The analog processor features a 1.28 Tbit/s digital interface compatible with PyTorch, enabling probabilistic convolution processing within 37.5 ps per convolution. We use the system for simultaneous classification and out-of-domain detection of blood cell microscope images and demonstrate reasoning that distinguishes aleatoric from epistemic uncertainty. The photonic Bayesian machine removes the bottleneck of pseudo-random number generation in digital systems, minimizes the cost of sampling for probabilistic models, and thus enables high-speed trustworthy AI systems.
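The aleatoric/epistemic split the abstract refers to is commonly computed from Monte Carlo samples of a Bayesian network's predictive distribution: total predictive entropy decomposes into the mean per-sample entropy (aleatoric) plus the mutual information between prediction and weights (epistemic). A hedged numpy sketch of that standard decomposition (not the paper's photonic implementation) follows:

```python
import numpy as np

def uncertainty_decomposition(prob_samples):
    """Split predictive uncertainty from MC samples of a Bayesian model.

    prob_samples: array of shape (S, C) -- S sampled class-probability
    vectors (one per sampled weight configuration) over C classes.

    Returns (total, aleatoric, epistemic) in nats, where
    total     = entropy of the mean prediction,
    aleatoric = mean entropy of the individual predictions,
    epistemic = total - aleatoric (mutual information).
    """
    p = np.asarray(prob_samples, dtype=float)
    eps = 1e-12                                  # guard against log(0)
    mean_p = p.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))
    aleatoric = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    return total, aleatoric, total - aleatoric

# Confident but disagreeing samples -> mostly epistemic uncertainty,
# the signature of an out-of-domain input:
samples = np.array([[0.99, 0.01],
                    [0.01, 0.99]])
total, alea, epi = uncertainty_decomposition(samples)
```

In this toy case each sampled network is individually confident (low aleatoric entropy), but the samples disagree, so nearly all of the total entropy is epistemic, which is how out-of-domain detection of the kind described above typically works.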